Full-text access type
Paid full text | 8039 articles |
Free | 1059 articles |
Free (domestic) | 595 articles |
Subject classification
Electrical engineering | 1154 articles |
General | 622 articles |
Chemical industry | 385 articles |
Metalworking | 85 articles |
Machinery and instruments | 572 articles |
Building science | 375 articles |
Mining engineering | 283 articles |
Energy and power | 98 articles |
Light industry | 343 articles |
Hydraulic engineering | 142 articles |
Petroleum and natural gas | 176 articles |
Weapons industry | 85 articles |
Radio and electronics | 1403 articles |
General industrial technology | 690 articles |
Metallurgy | 279 articles |
Atomic energy technology | 140 articles |
Automation technology | 2861 articles |
Publication year
2024 | 40 articles |
2023 | 195 articles |
2022 | 182 articles |
2021 | 240 articles |
2020 | 425 articles |
2019 | 368 articles |
2018 | 272 articles |
2017 | 294 articles |
2016 | 363 articles |
2015 | 345 articles |
2014 | 495 articles |
2013 | 581 articles |
2012 | 595 articles |
2011 | 578 articles |
2010 | 450 articles |
2009 | 444 articles |
2008 | 451 articles |
2007 | 528 articles |
2006 | 457 articles |
2005 | 401 articles |
2004 | 317 articles |
2003 | 272 articles |
2002 | 231 articles |
2001 | 192 articles |
2000 | 169 articles |
1999 | 134 articles |
1998 | 99 articles |
1997 | 92 articles |
1996 | 73 articles |
1995 | 71 articles |
1994 | 49 articles |
1993 | 54 articles |
1992 | 39 articles |
1991 | 29 articles |
1990 | 30 articles |
1989 | 22 articles |
1988 | 17 articles |
1987 | 11 articles |
1986 | 18 articles |
1985 | 12 articles |
1984 | 7 articles |
1983 | 7 articles |
1982 | 7 articles |
1981 | 5 articles |
1980 | 8 articles |
1979 | 4 articles |
1976 | 4 articles |
1959 | 5 articles |
1958 | 2 articles |
1955 | 2 articles |
9693 results found (search time: 0 ms)
101.
In 3D neural networks that process point clouds directly, the sampling stage selects key points from the raw point cloud and strongly affects both overall network performance and robustness to noise. The mainstream farthest point sampling (FPS) method is computationally expensive and slow on large-scale 3D point cloud data, and model performance degrades markedly after FPS at low sampling rates. To address these two problems, a point cloud processing network for low sampling rates, AS-Net, is proposed. A new sampling module replaces the FPS in the original backbone; it consists of two layers, each of which uses a long short-term memory (LSTM) network to learn association weights between the raw and sampled point clouds, efficiently extracting key information and discarding redundancy. On this basis, an attention mechanism selects raw points with high feature values as sampling points, which are fed into the network as key points for downstream tasks, further improving model performance. Experiments on the ModelNet40 dataset show that, at low sampling rates, AS-Net still achieves 81.6% classification accuracy, a 52.7% improvement over the network that uses FPS as its sampling method. AS-Net is also highly robust to noise, and its segmentation time on large scenes is better than that of comparable sampling methods.
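For context, the farthest point sampling (FPS) baseline that this entry compares against can be sketched in a few lines. The function and the random toy cloud below are illustrative, not code from the paper; the greedy loop also shows why FPS cost grows quickly with cloud size.

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Greedy FPS: repeatedly pick the point farthest from the chosen set."""
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    chosen = [int(rng.integers(n))]
    # distance from every point to its nearest already-chosen point
    dist = np.linalg.norm(points - points[chosen[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))          # farthest remaining point
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return np.array(chosen)

cloud = np.random.default_rng(1).normal(size=(1024, 3))
idx = farthest_point_sampling(cloud, 64)    # indices of 64 key points
```

Each iteration scans all n points, so sampling k points costs O(nk), which is the bottleneck the abstract attributes to FPS on large clouds.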
102.
Data augmentation (DA) is a ubiquitous approach in several text generation tasks, and many DA methods have appeared in machine translation, especially in low-resource language scenarios. The most common methods build a pseudo-corpus by randomly sampling, omitting, or replacing words in the text, but such approaches hardly guarantee the quality of the augmented data. In this study, we augment the corpus with a constrained sampling method and build an evaluation framework to select higher-quality data after augmentation; specifically, a discriminator sub-model mitigates syntactic and semantic errors to some extent. Experimental results show that our method consistently outperforms previous state-of-the-art methods on both small- and large-scale corpora, across eight language pairs from four corpora, by 2.38 to 4.18 BLEU (bilingual evaluation understudy) points.
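A minimal example of the pseudo-corpus construction described here (random omission of words) might look like the sketch below; `random_omit` and the sample sentence are invented for illustration, and the paper's constrained sampling and discriminator filtering are not reproduced.

```python
import random

def random_omit(tokens, p=0.1, seed=42):
    """Drop each token independently with probability p to form a noisy variant."""
    rng = random.Random(seed)
    kept = [t for t in tokens if rng.random() >= p]
    return kept if kept else tokens  # never emit an empty sentence

sent = "the quick brown fox jumps over the lazy dog".split()
aug = random_omit(sent, p=0.2)       # a pseudo-corpus sentence
```

The abstract's point is that unconstrained edits like this can damage syntax and semantics, which is what its discriminator sub-model is meant to filter out.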
103.
The proportion of non-conforming items has traditionally been used as the criterion for evaluating item quality. In recent years, however, it is no longer adequate for controlling high-quality manufacturing, so further quality improvement and innovation require a more discriminating evaluation. The concept of quality loss from the Taguchi methods serves as such a stricter criterion, and a variables single sampling plan with desired operating characteristics (OCs) indexed by quality loss has been proposed in statistical quality control. Meanwhile, the most economical sampling inspection in terms of average sample number (ASN) is the sequential sampling plan based on Wald's sequential probability ratio test. From the viewpoint of cost reduction, we therefore discuss a variables sequential sampling plan with desired OCs indexed by quality loss, extending the utility of variables sampling plans for quality loss. As a result, a design procedure for a sequential sampling plan satisfying required design conditions indexed by quality loss is provided, and the effectiveness of the proposed plan is verified through numerical examples.
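The Wald sequential probability ratio test underlying such a sequential plan can be sketched for a simple attributes (Bernoulli) characteristic; the paper's plan is indexed by quality loss for variables data, which this simplified sketch does not reproduce.

```python
import math

def sprt_bernoulli(samples, p0, p1, alpha=0.05, beta=0.10):
    """Wald SPRT for H0: p = p0 vs H1: p = p1 (p1 > p0) on 0/1 observations.
    Returns the decision and the number of items inspected."""
    a = math.log(beta / (1 - alpha))    # lower (accept-H0) boundary
    b = math.log((1 - beta) / alpha)    # upper (reject-H0) boundary
    llr = 0.0                           # running log-likelihood ratio
    for n, x in enumerate(samples, start=1):
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr <= a:
            return ("accept H0", n)
        if llr >= b:
            return ("reject H0", n)
    return ("continue", len(samples))
```

Because inspection stops as soon as a boundary is crossed, the expected number of inspected items is below that of a fixed-size plan with the same risks, which is the ASN advantage the abstract invokes.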
104.
Reducing the sampling rate as far as possible is a high priority for many factories seeking to cut production cost. An automatic-virtual-metrology-based intelligent sampling decision (ISD) scheme was previously developed to reduce the sampling rate while sustaining virtual metrology (VM) accuracy. However, the desired sampling rate in the ISD scheme is fixed and set manually: when VM accuracy degrades, the scheme cannot adaptively raise the default sampling rate, so collecting enough samples to restore accuracy takes longer; and when VM accuracy stays good, it cannot automatically lower the default rate, which may cause unnecessary waste. Accordingly, this paper proposes an automated sampling decision (ASD) scheme that adaptively and automatically adjusts the sampling rate online and in real time for continuous improvement. The ASD scheme monitors VM accuracy online and updates the VM models in real time to maintain accuracy when it becomes poor, and it automatically reduces the sampling rate while accuracy remains good.
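The adaptive rate update that the ASD scheme automates can be caricatured as a simple feedback rule; the target, step size, and bounds below are invented for illustration and are not the paper's parameters.

```python
def adjust_rate(rate, accuracy, target=0.95, step=0.05, lo=0.05, hi=1.0):
    """Raise the sampling rate when VM accuracy falls below the target;
    lower it (down to a floor) while accuracy stays acceptable."""
    if accuracy < target:
        return min(hi, rate + step)   # accuracy poor: sample more
    return max(lo, rate - step)       # accuracy good: sample less
```

A real scheme would drive this from an online accuracy monitor and retrain the VM models as new metrology samples arrive, as the abstract describes.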
105.
Space-filling and projective properties of design-of-computer-experiments methods are desired features for metamodelling. To enable the production of high-quality sequential samples, this article presents a novel deterministic sequential maximin Latin hypercube design (LHD) method using successive local enumeration, denoted sequential-successive local enumeration (S-SLE). First, a mesh-mapping algorithm maps the positions of existing points into the new hyper-chessboard to preserve the projective property. According to the maximin distance criterion, new sequential samples are then generated through successive local enumeration iterations to improve space-filling uniformity. Comparative studies demonstrate several appealing merits of S-SLE: (1) it outperforms several existing LHD methods in sequential sampling quality; (2) it is flexible and robust enough to produce high-quality multiple-stage sequential samples; and (3) it improves the overall performance of sequential metamodel-based optimization algorithms. Thus, S-SLE is a promising sequential LHD method for metamodel-based optimization.
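A crude maximin LHD by random restarts shows the two properties at stake (one point per stratum in each dimension, and a large minimum pairwise distance); S-SLE's mesh mapping and successive local enumeration are deterministic and sequential, and are not reproduced in this sketch.

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """One random LHD: each column is a permutation of the n strata midpoints."""
    u = (np.arange(n) + 0.5) / n
    return np.column_stack([rng.permutation(u) for _ in range(d)])

def min_pairwise_dist(x):
    """Minimum Euclidean distance over all point pairs (the maximin criterion)."""
    diff = x[:, None, :] - x[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    return dist[np.triu_indices(len(x), k=1)].min()

def maximin_lhd(n, d, tries=200, seed=0):
    """Best of `tries` random LHDs by minimum pairwise distance."""
    rng = np.random.default_rng(seed)
    best, best_score = None, -1.0
    for _ in range(tries):
        x = latin_hypercube(n, d, rng)
        s = min_pairwise_dist(x)
        if s > best_score:
            best, best_score = x, s
    return best

design = maximin_lhd(10, 2)
```

Every candidate is an LHD by construction (projective property), and the restart loop only optimizes the space-filling score; sequential methods like S-SLE must additionally respect points already placed.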
106.
A Single Sampling Plan Based on Exponentially Weighted Moving Average Model for Linear Profiles
Fu-Kwun Wang, Quality and Reliability Engineering International, 2016, 32(5): 1795-1805
The exponentially weighted moving average (EWMA) model has been successfully used in acceptance sampling plans, since it carries quality information from both the current lot and the preceding lots; a multiple dependent state (MDS) sampling plan likewise considers the quality of preceding lots. In this study, we present two new sampling plans for linear profiles: one based on the EWMA model with a yield index using the single sampling plan, and the other based on the EWMA model with a yield index using MDS sampling plans. The plan parameters are determined by a nonlinear optimization approach. When the smoothing parameter equals one, the first proposed plan reduces to the traditional single sampling plan. Comparisons show that the MDS sampling plan based on the EWMA model with a yield index and a smaller smoothing parameter requires a smaller sample size than both the traditional single sampling plan and the EWMA-based single sampling plan. One real example illustrates the proposed plan. Copyright © 2015 John Wiley & Sons, Ltd.
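The EWMA statistic these plans build on is a one-line recursion; the sketch below (with invented inputs) also illustrates the abstract's remark that a smoothing parameter of one reduces the statistic to the current lot alone.

```python
def ewma_series(xs, lam, z0=0.0):
    """EWMA recursion z_t = lam * x_t + (1 - lam) * z_{t-1}:
    the current lot's index blended with the history of preceding lots."""
    zs, z = [], z0
    for x in xs:
        z = lam * x + (1 - lam) * z
        zs.append(z)
    return zs
```

With `lam = 1.0` the output equals the raw lot indices (no memory, the traditional single-lot view); smaller `lam` weights preceding lots more heavily, which is what lets the EWMA-based plans reach a decision with fewer samples per lot.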
107.
108.
Three common mistuning phenomena (dynamic mistuning, mass mistuning, and friction mistuning) are considered, and the mistuned dynamic response of a space station deployment mechanism is studied. Using multi-flexible-body dynamics, a mistuned model of the flexible deployment mechanism is built on the ADAMS (Automatic Dynamic Analysis of Mechanical Systems) simulation platform; mistuning amounts are drawn randomly by importance sampling, dynamics simulations of the mechanism are then run, and the resulting mistuned dynamic responses are collected and analyzed statistically. Simulation studies show that the method obtains the random distribution characteristics of the mistuned dynamic response of the flexible deployment mechanism in relatively little computation time.
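The importance sampling step used to draw the mistuning amounts can be illustrated on a scalar toy problem; the densities below are invented for the sketch and are unrelated to the paper's mistuning distributions.

```python
import numpy as np

def norm_pdf(x, mu=0.0, sigma=1.0):
    """Density of N(mu, sigma^2)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def is_estimate(n=200_000, seed=0):
    """Estimate E[X^2] for X ~ N(0,1) by sampling from a wider proposal N(0,4)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 2.0, size=n)                # draws from the proposal
    w = norm_pdf(x, 0, 1) / norm_pdf(x, 0, 2)       # importance weights
    return float(np.mean(w * x ** 2))               # weighted estimate, true value 1
```

Weighting corrects for sampling from the wrong density, so a proposal concentrated on the responses of interest (here, a heavier-tailed one) yields accurate statistics from fewer simulation runs, which is the computation-time saving the abstract reports.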
109.
Objective: To explore and construct product design thinking and methods for the Internet and intelligent era. Methods: Taking the redefinition of the product concept in the new design context as the starting point, the paper discusses the four kinds of thinking required for product development and design in the era of the Internet and ubiquitous intelligence (design, big data, community, and scenario thinking) and how they integrate, in order to explore new processes and methods that differ from product development and design in the traditional context. Conclusion: With product software and hardware highly integrated and ubiquitous intelligence becoming mainstream, whole-process user participatory design centered on community building, evaluating a product's commercial potential through scenario thinking, and reducing trial-and-error cycles and cost through rapid testing and feedback iteration are gradually becoming the core of new product development and design in the Internet and intelligent era.
110.
This paper presents an innovative application of a new class of parallel interacting Markov chain Monte Carlo methods to the Bayesian history matching (BHM) problem. BHM consists of sampling a posterior distribution given by Bayes' theorem. Markov chain Monte Carlo (MCMC) is well suited, in principle, to sampling any type of distribution; however, the number of iterations required by traditional single-chain MCMC can be prohibitive in BHM applications. Furthermore, history matching is typically a highly nonlinear inverse problem, leading to very complex posterior distributions characterized by many separated modes, so a single chain can become trapped in a local mode. Parallel interacting chains are an interesting way to overcome this problem, as shown in this paper; we also present new approaches for defining the starting points of the parallel chains. For validation, the proposed methodology is first applied to a simple but challenging cross-section reservoir model with many modes in the posterior distribution, and then to a realistic case integrated with geostatistical modelling. The results show that combining parallel interacting chains with the distributed computing capabilities commonly available today is very promising for solving the BHM problem.
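A minimal version of parallel interacting chains is tempered Metropolis sampling with swap moves between adjacent chains; the bimodal target below is a toy stand-in for a multi-modal BHM posterior, and none of the reservoir specifics are modeled.

```python
import numpy as np

def log_post(x):
    """Toy bimodal log-posterior: equal mixture of N(-3, 1) and N(3, 1)."""
    return np.logaddexp(-0.5 * (x + 3) ** 2, -0.5 * (x - 3) ** 2)

def interacting_chains(n_chains=4, n_iter=5000, step=1.0, seed=0):
    """Parallel Metropolis chains at rising temperatures with swap moves.
    Chain 0 (temperature 1) targets the posterior itself."""
    rng = np.random.default_rng(seed)
    temps = np.linspace(1.0, 4.0, n_chains)
    x = rng.uniform(-6, 6, n_chains)        # dispersed starting points
    samples = []
    for _ in range(n_iter):
        # within-chain Metropolis updates (tempered acceptance)
        prop = x + rng.normal(0, step, n_chains)
        accept = np.log(rng.uniform(size=n_chains)) < (log_post(prop) - log_post(x)) / temps
        x = np.where(accept, prop, x)
        # interaction: propose swapping states of two adjacent chains
        i = int(rng.integers(n_chains - 1))
        log_r = (log_post(x[i + 1]) - log_post(x[i])) * (1 / temps[i] - 1 / temps[i + 1])
        if np.log(rng.uniform()) < log_r:
            x[i], x[i + 1] = x[i + 1], x[i]
        samples.append(x[0])
    return np.array(samples)

draws = interacting_chains()
```

Hot chains cross the low-probability valley between modes easily, and the swap moves pass those discoveries down to the cold chain, which is the mechanism that rescues a single chain from being trapped in one mode.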